Applicability of approximate multipliers in hardware neural networks
Authors
Abstract
In recent years there has been a growing interest in hardware neural networks, which offer many benefits over conventional software models, mainly in applications where speed, cost, reliability, or energy efficiency are of great importance. These hardware neural networks require many resource-, power- and time-consuming multiplication operations, so special care must be taken during their design. Since neural network processing can be performed in parallel, there is usually a requirement for designs with as many concurrent multiplication circuits as possible. One option to achieve this goal is to replace the complex exact multiplying circuits with simpler, approximate ones. The present work demonstrates the application of approximate multiplying circuits in the design of a feed-forward neural network model with on-chip learning ability. The experiments performed on a heterogeneous PROBEN1 benchmark dataset show that the adaptive nature of the neural network model successfully compensates for the calculation errors of the approximate multiplying circuits. At the same time, the proposed designs also profit from more computing power and increased energy efficiency. © 2012 Elsevier B.V. All rights reserved.
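The idea of trading multiplier accuracy for circuit simplicity can be illustrated in software. The truncation scheme below is a hypothetical behavioral sketch (not the specific circuit evaluated in the paper): dropping the low-order bits of each operand shrinks the partial-product array in hardware, at the cost of a bounded relative error that an adaptive learning rule can absorb.

```python
def approx_mul(a: int, b: int, trunc_bits: int = 4) -> int:
    """Approximate unsigned multiply: zero the low `trunc_bits` bits of
    each operand before multiplying. In hardware this removes the
    corresponding rows/columns of partial products."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

# The result underestimates the exact product; the relative error
# shrinks as the operands grow.
exact = 200 * 150              # 30000
approx = approx_mul(200, 150)  # (192 * 144) = 27648
```

Because the truncation error is systematic (always an underestimate here), a network trained on-chip with the same approximate multiplier can fold the bias into its learned weights.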
Similar Resources
Stabilization of Nonlinear Control Systems through Using Zobov’s Theorem and Neural Networks
Zobov’s Theorem is one of the theorems that state the conditions for the stability of a nonlinear system with a specific region of attraction. We have applied neural networks to approximate some functions mentioned in Zobov’s theorem in order to find the controller of a nonlinear controlled system whose control law is difficult to derive mathematically. Finally, the effectiveness and the applica...
Full Text
A Comparative Approximate Economic Behavior Analysis Of Support Vector Machines And Neural Networks Models
Full Text
On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks
Large-scale deep neural networks are both memory-intensive and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware acceleration of deep neural networks has been extensively investigated in both industry and academia. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing to h...
Full Text
Implementation of High Performance Multipliers Based on Approximate Compressor Design
Estimating arithmetic is a design paradigm for DSP hardware. By allowing structurally incomplete arithmetic circuits to occasionally perform imprecise calculations, higher performance can be achieved in many different electronic systems. This paper presents a potentially useful approach to implement tree multipliers by using estimating arithmetic. Experimental results show the applicability and e...
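Compressor-based tree multipliers reduce columns of partial-product bits with 4:2 compressors; an approximate variant simplifies the compressor's logic at the cost of occasional errors. The following is a hypothetical behavioral sketch of this idea (not the specific compressor design of the cited paper): the approximate version ignores the carry-in and saturates the bit count, which removes one carry chain but miscounts when many inputs are 1.

```python
def exact_comp42(x1, x2, x3, x4, cin):
    """Behavioral model of an exact 4:2 compressor.
    Invariant: sum + 2*(carry + cout) == x1 + x2 + x3 + x4 + cin."""
    total = x1 + x2 + x3 + x4 + cin
    s = total & 1
    c2 = total >> 1                       # 0..2 units of weight 2
    return s, int(c2 >= 2), int(c2 >= 1)  # (sum, carry, cout)

def approx_comp42(x1, x2, x3, x4, cin=0):
    """Hypothetical approximate variant: drop cin and saturate the
    count at 3, so only (sum, carry) are produced. The only error on
    the main inputs occurs when all four bits are 1 (4 counted as 3)."""
    total = min(3, x1 + x2 + x3 + x4)
    return total & 1, total >> 1           # (sum, carry)
```

In a tree multiplier, such a compressor shortens the reduction stage; whether the resulting error is tolerable depends on where in the partial-product array it is placed.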
Full Text
Solving nonlinear Lane-Emden type equations with unsupervised combined artificial neural networks
In this paper we propose a method for solving some well-known classes of Lane-Emden type equations, which are nonlinear ordinary differential equations on the semi-infinite domain. The proposed approach is based on an Unsupervised Combined Artificial Neural Networks (UCANN) method. First, the trial solutions of the differential equations are written in the form of feed-forward neural networks cont...
Full Text
Journal: Neurocomputing
Volume: 96
Pages: -
Publication date: 2012